This article explores how different decision tree hyperparameters affect performance and visual structure, using scikit-learn's `DecisionTreeRegressor` and the California housing dataset. It examines the impact of `max_depth`, `ccp_alpha`, `min_samples_split`, `min_samples_leaf`, and `max_leaf_nodes`, and demonstrates hyperparameter tuning with cross-validation and scikit-optimize's `BayesSearchCV`.
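A minimal sketch of that kind of search, assuming scikit-optimize (`skopt`) is installed; the search ranges and `n_iter` below are illustrative choices, not the article's exact settings:

```python
# Sketch: tuning a DecisionTreeRegressor with BayesSearchCV on California housing.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from skopt import BayesSearchCV
from skopt.space import Integer, Real

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = BayesSearchCV(
    DecisionTreeRegressor(random_state=0),
    search_spaces={
        "max_depth": Integer(2, 20),
        "min_samples_split": Integer(2, 50),
        "min_samples_leaf": Integer(1, 50),
        "max_leaf_nodes": Integer(10, 500),
        "ccp_alpha": Real(1e-6, 1e-1, prior="log-uniform"),
    },
    n_iter=32,  # number of Bayesian optimization steps
    cv=5,       # 5-fold cross-validation per candidate
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```

`BayesSearchCV` is a drop-in replacement for `GridSearchCV`, so `best_params_` and `score` work the same way.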
This article explores the impact of hyperparameters on random forests, in terms of both performance and visual structure. It compares a default random forest against tuned decision trees and examines the effects of hyperparameters such as `n_estimators`, `max_depth`, and `ccp_alpha` using visualizations of individual trees, predictions, and errors.
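As a rough illustration, the snippet below contrasts a default `RandomForestRegressor` with one where `n_estimators`, `max_depth`, and `ccp_alpha` are set explicitly, and plots one of its constituent trees; the parameter values are assumptions for demonstration, not the article's tuned results:

```python
# Sketch: default vs. explicitly parameterized random forest, plus one tree plot.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import plot_tree

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

default_rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
tuned_rf = RandomForestRegressor(
    n_estimators=300,  # more trees: lower variance, slower training
    max_depth=10,      # cap the depth of each tree
    ccp_alpha=1e-4,    # cost-complexity pruning of individual trees
    random_state=0,
).fit(X_train, y_train)

print("default R^2:", default_rf.score(X_test, y_test))
print("tuned   R^2:", tuned_rf.score(X_test, y_test))

# Visualize the first tree of the tuned forest (truncated for readability).
plot_tree(tuned_rf.estimators_[0], max_depth=2, filled=True)
plt.show()
```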
This article explores the application of Laplace-approximated Bayesian optimization for hyperparameter tuning, focusing on regularization in machine learning models. The author discusses the challenges of hyperparameter optimization, particularly in high-dimensional search spaces, and presents a case study using logistic regression with L2 regularization. The article compares grid search and Bayesian optimization, highlighting the latter's efficiency in finding optimal regularization coefficients, and explores the potential for individualized (per-variable) regularization parameters.
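The sketch below illustrates the grid-versus-Bayesian comparison on a stock dataset using scikit-optimize's Gaussian-process optimizer, not the article's Laplace approximation; the dataset, ranges, and call budget are illustrative assumptions:

```python
# Sketch: grid search vs. Bayesian optimization of the L2 strength (C = 1/lambda)
# of logistic regression. Standard GP-based BO from skopt, not the article's
# Laplace-approximated method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from skopt import gp_minimize
from skopt.space import Real

X, y = load_breast_cancer(return_X_y=True)

# Grid search: evaluates every point on a fixed log-spaced grid.
grid = GridSearchCV(
    LogisticRegression(penalty="l2", max_iter=5000),
    param_grid={"C": np.logspace(-4, 4, 20)},
    cv=5,
).fit(X, y)

# Bayesian optimization: a surrogate model directs evaluations toward
# promising regions, typically needing far fewer fits than the grid.
def objective(params):
    (C,) = params
    model = LogisticRegression(penalty="l2", C=C, max_iter=5000)
    return -cross_val_score(model, X, y, cv=5).mean()  # minimize negative accuracy

result = gp_minimize(objective, [Real(1e-4, 1e4, prior="log-uniform")],
                     n_calls=20, random_state=0)

print("grid  best C:", grid.best_params_["C"], "score:", grid.best_score_)
print("bayes best C:", result.x[0], "score:", -result.fun)
```

Extending this to per-variable regularization would mean one dimension per coefficient, which is exactly the high-dimensional regime where grid search becomes infeasible and the article's approach is motivated.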